Learning with smooth Hinge losses

Authors

Abstract

Due to the non-smoothness of the Hinge loss in SVM, it is difficult to obtain a faster convergence rate with modern optimization algorithms. In this paper, we introduce two smooth Hinge losses ψG(α;σ) and ψM(α;σ) which are infinitely differentiable and converge to the Hinge loss uniformly in α as σ tends to 0. By replacing the Hinge loss with these smooth losses, we obtain two smooth support vector machines (SSVMs), respectively. Solving the SSVMs with the Trust Region Newton method (TRON) leads to quadratically convergent algorithms. Experiments on text classification tasks show that the proposed SSVMs are effective in real-world applications. We also introduce a general smooth convex loss function to unify several commonly-used loss functions in machine learning. The framework provides smooth approximations to non-smooth loss functions, which can be used to obtain smoothed models that can be solved efficiently.
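
The abstract does not reproduce the closed forms of ψG(α;σ) and ψM(α;σ). As a minimal sketch of the general idea only (a smooth, infinitely differentiable surrogate that converges uniformly to the Hinge loss max(0, α) as σ tends to 0), the snippet below uses the standard softplus smoothing; the function name smooth_hinge and the parameter sigma are illustrative and are not the paper's definitions.

```python
import numpy as np

def hinge(alpha):
    """Standard (non-smooth) Hinge loss component max(0, alpha)."""
    return np.maximum(0.0, alpha)

def smooth_hinge(alpha, sigma):
    """Softplus-style smooth surrogate: sigma * log(1 + exp(alpha / sigma)).

    Infinitely differentiable for sigma > 0 and converges uniformly to
    max(0, alpha) as sigma -> 0 (the gap is at most sigma * log 2).
    NOTE: an illustrative smoothing, not necessarily the paper's psi_G or psi_M.
    """
    # np.logaddexp(0, x) computes log(1 + exp(x)) in a numerically stable way.
    return sigma * np.logaddexp(0.0, alpha / sigma)

if __name__ == "__main__":
    alpha = np.linspace(-3.0, 3.0, 7)
    for sigma in (1.0, 0.1, 0.01):
        gap = np.max(np.abs(smooth_hinge(alpha, sigma) - hinge(alpha)))
        # The maximum deviation from the Hinge loss shrinks linearly with sigma.
        print(f"sigma={sigma:<5} max |smooth - hinge| = {gap:.4f}")
```

With such a surrogate the objective becomes twice differentiable, which is what makes second-order solvers such as the Trust Region Newton method (TRON) applicable; for this particular smoothing the uniform approximation gap is at most σ·log 2.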


Similar articles

Learning Submodular Losses with the Lovasz Hinge

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...

Multicategory large margin classification methods: Hinge losses vs. coherence functions

Article history: Received 3 August 2013; Received in revised form 9 May 2014; Accepted 16 June 2014; Available online 20 June 2014

Online learning with kernel losses

We present a generalization of the adversarial linear bandits framework, where the underlying losses are kernel functions (with an associated reproducing kernel Hilbert space) rather than linear functions. We study a version of the exponential weights algorithm and bound its regret in this setting. Under conditions on the eigen-decay of the kernel we provide a sharp characterization of the regr...

The Lovász Hinge: A Convex Surrogate for Submodular Losses

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. Ho...

Fast learning rates with heavy-tailed losses

We study fast learning rates when the losses are not necessarily bounded and may have a distribution with heavy tails. To enable such analyses, we introduce two new conditions: (i) the envelope function sup_{f∈F} |ℓ ∘ f|, where ℓ is the loss function and F is the hypothesis class, exists and is L-integrable, and (ii) ℓ satisfies the multi-scale Bernstein's condition on F. Under these assumptions...

Journal

Journal title: Neurocomputing

Year: 2021

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.08.060